The Greedy Miser: Learning under Test-time Budgets
Authors
Abstract
As machine learning algorithms enter applications in industrial settings, there is increased interest in controlling their cpu-time during testing. The cpu-time consists of the running time of the algorithm and the extraction time of the features. The latter can vary drastically when the feature set is diverse. In this paper, we propose an algorithm, the Greedy Miser, that incorporates the feature extraction cost during training to explicitly minimize the cpu-time during testing. The algorithm is a straightforward extension of stagewise regression and is equally suitable for regression or multi-class classification. Compared to prior work, it is significantly more cost-effective and scales to larger data sets.
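To make the cost-aware stagewise idea concrete, here is a minimal, hypothetical sketch (not the authors' implementation): at each stage it selects the feature whose squared correlation with the current residual best justifies its extraction cost, and a feature's cost is charged only the first time it is used. The function name, the trade-off parameter `lam`, and the linear squared-loss setting are assumptions made purely for illustration.

```python
import numpy as np

def cost_aware_stagewise_regression(X, y, costs, lam=0.1, n_stages=50, step=0.5):
    """Illustrative sketch of cost-sensitive stagewise regression.

    Assumes standardized feature columns in X, per-feature extraction costs
    in `costs`, and squared loss. A feature's cost is paid only the first
    time it is selected, since an already-extracted feature is free to reuse.
    (Hypothetical helper, not the paper's code.)
    """
    n, d = X.shape
    beta = np.zeros(d)
    intercept = y.mean()
    residual = y - intercept
    used = np.zeros(d, dtype=bool)

    for _ in range(n_stages):
        corr = X.T @ residual / n              # fit of each feature to the residual
        gain = corr ** 2                       # proxy for loss reduction
        penalty = lam * costs * (~used)        # pay extraction cost only once
        scores = gain - penalty
        j = int(np.argmax(scores))
        if scores[j] <= 0:                     # no feature is worth its cost
            break
        beta[j] += step * corr[j]              # small stagewise update
        used[j] = True
        residual = y - intercept - X @ beta
    return intercept, beta, used
```

At test time, only the features flagged in `used` would need to be extracted, which is where the cpu-time saving during testing comes from in this toy version.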
Similar resources
Scalable Influence Maximization for Multiple Products in Continuous-Time Diffusion Networks
A typical viral marketing model identifies influential users in a social network to maximize a single product adoption assuming unlimited user attention, campaign budgets, and time. In reality, multiple products need campaigns, users have limited attention, convincing users incurs costs, and advertisers have limited budgets and expect the adoptions to be maximized soon. Facing these user, monet...
Whatever Does Not Kill Deep Reinforcement Learning, Makes It Stronger
Recent developments have established the vulnerability of deep Reinforcement Learning (RL) to policy manipulation attacks via adversarial perturbations. In this paper, we investigate the robustness and resilience of deep RL to training-time and test-time attacks. Through experimental results, we demonstrate that under non-contiguous training-time attacks, Deep Q-Network (DQN) agents can recover a...
Revenue Maximizing Auction when Bidders have Private Budgets
We tackle the problem of designing revenue maximizing auctions in the Bayesian framework, when bidders not only have private valuations but also private budgets. We consider the setting of selling divisible goods to multiple agents each with linear utilities, but agents cannot pay beyond their budget. We focus on the case when the auctioneer can check that bidders do not over-report th...
Efficient Feature Group Sequencing for Anytime Linear Prediction
We consider anytime linear prediction in the common machine learning setting, where features are in groups that have costs. We achieve anytime (or interruptible) predictions by sequencing the computation of feature groups and reporting results using the computed features at interruption. We extend Orthogonal Matching Pursuit (OMP) and Forward Regression (FR) to learn the sequencing greedily und...
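The truncated abstract above hints at a greedy, cost-normalized ordering of feature groups. A minimal sketch of that idea, assuming a linear model, squared loss, and hypothetical `groups`/`costs` structures (not the authors' actual OMP/FR extension), might look like this:

```python
import numpy as np

def greedy_group_sequence(X, y, groups, costs):
    """Toy, cost-greedy ordering of feature groups for anytime prediction.

    `groups` maps a group name to its column indices in X and `costs` gives
    each group's computation cost (both structures are assumptions for this
    example). Groups are ranked by squared correlation with the current
    residual per unit cost, refitting a linear model after each pick in the
    spirit of Orthogonal Matching Pursuit.
    """
    residual = y.astype(float).copy()
    order, selected_cols = [], []
    remaining = dict(groups)
    while remaining:
        def value(name):
            corr = X[:, remaining[name]].T @ residual
            return float(corr @ corr) / costs[name]
        best = max(remaining, key=value)
        order.append(best)
        selected_cols.extend(remaining.pop(best))
        # OMP-style refit on everything selected so far, then update residual.
        coef, *_ = np.linalg.lstsq(X[:, selected_cols], y, rcond=None)
        residual = y - X[:, selected_cols] @ coef
    return order  # compute groups in this order; predict at interruption
```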
Local learning by partitioning
In many machine learning applications data is assumed to be locally simple, where examples near each other have similar characteristics such as class labels or regression responses. Our goal is to exploit this assumption to construct locally simple yet globally complex systems that improve performance or reduce the cost of common machine learning tasks. To this end, we address three main proble...
Journal title:
Volume / Issue:
Pages:
Publication year: 2012